Sparsity and the truncated ℓ2-norm

Author

  • Lee H. Dicker
Abstract

Sparsity is a fundamental topic in high-dimensional data analysis. Perhaps the most common measures of sparsity are the ℓp-norms, for 0 ≤ p < 2. In this paper, we study an alternative measure of sparsity, the truncated ℓ2-norm, which is related to other ℓp-norms, but appears to have some unique and useful properties. Focusing on the n-dimensional Gaussian location model, we derive exact asymptotic minimax results for estimation over truncated ℓ2-balls, which complement existing results for ℓp-balls. We then propose simple new adaptive thresholding estimators that are inspired by the truncated ℓ2-norm and are adaptive asymptotic minimax over ℓp-balls (0 ≤ p < 2), as well as truncated ℓ2-balls. Finally, we derive lower bounds on the Bayes risk of an estimator, in terms of the parameter's truncated ℓ2-norm. These bounds provide necessary conditions for Bayes risk consistency in certain problems that are relevant for high-dimensional Bayesian modeling.
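To make the setting concrete, the sketch below simulates the n-dimensional Gaussian location model x ∼ N(θ, I_n) with a sparse mean vector and applies a coordinate-wise soft-thresholding estimator. The threshold √(2 log n) is the classical universal choice, used here only for illustration; it is not the adaptive threshold proposed in the paper, and the dimension, sparsity level, and signal size are arbitrary choices for the example.

import numpy as np

# Gaussian location model: observe x = theta + z with z ~ N(0, I_n).
rng = np.random.default_rng(0)
n, k = 10_000, 50                        # dimension and number of nonzero coordinates (hypothetical)
theta = np.zeros(n)
theta[:k] = 5.0                          # a sparse mean vector
x = theta + rng.standard_normal(n)       # one observation from N(theta, I_n)

# Coordinate-wise soft-thresholding with the universal threshold sqrt(2 log n),
# used here purely as an illustrative (non-adaptive) choice.
lam = np.sqrt(2 * np.log(n))
theta_hat = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

print("soft-thresholding loss:", np.sum((theta_hat - theta) ** 2))
print("naive estimator loss:  ", np.sum((x - theta) ** 2))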


Similar articles

Prediction of blood cancer using leukemia gene expression data and sparsity-based gene selection methods

Background: DNA microarray is a useful technology that simultaneously assesses the expression of thousands of genes. It can be utilized for the detection of cancer types and cancer biomarkers. This study aimed to predict blood cancer using leukemia gene expression data and a robust ℓ2,p-norm sparsity-based gene selection method. Materials and Methods: In this descriptive study, the microarray ...


Large-scale Inversion of Magnetic Data Using Golub-Kahan Bidiagonalization with Truncated Generalized Cross Validation for Regularization Parameter Estimation

In this paper a fast method for large-scale sparse inversion of magnetic data is considered. The L1-norm stabilizer is used to generate models with sharp and distinct interfaces. To deal with the non-linearity introduced by the L1-norm, a model-space iteratively reweighted least squares algorithm is used. The original model matrix is factorized using the Golub-Kahan bidiagonalization that proje...
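As a rough illustration of the reweighting idea mentioned in this abstract, the sketch below shows a generic iteratively reweighted least squares (IRLS) loop for an ℓ1-penalized linear inverse problem, min_m ‖Am − d‖² + α‖m‖₁. It is only a minimal sketch of that generic formulation, not the paper's model-space implementation with Golub-Kahan bidiagonalization or truncated GCV parameter selection; A, d, alpha, and eps are placeholder inputs.

import numpy as np

def irls_l1(A, d, alpha=1.0, eps=1e-6, n_iter=50):
    """Generic IRLS sketch for min ||A m - d||^2 + alpha * ||m||_1 (illustrative only)."""
    m = np.linalg.lstsq(A, d, rcond=None)[0]      # ordinary least-squares start
    for _ in range(n_iter):
        # Approximate |m_i| by m_i^2 / (|m_i| + eps), giving a weighted L2 penalty.
        w = 1.0 / (np.abs(m) + eps)
        m = np.linalg.solve(A.T @ A + alpha * np.diag(w), A.T @ d)
    return m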


Speech Enhancement using Adaptive Data-Based Dictionary Learning

In this paper, a speech enhancement method based on sparse representation of data frames has been presented. Speech enhancement is one of the most applicable areas in different signal processing fields. The objective of a speech enhancement system is improvement of either intelligibility or quality of the speech signals. This process is carried out using the speech signal processing techniques ...


A NOVEL FUZZY-BASED SIMILARITY MEASURE FOR COLLABORATIVE FILTERING TO ALLEVIATE THE SPARSITY PROBLEM

Memory-based collaborative filtering is the most popular approach to build recommender systems. Despite its success in many applications, it still suffers from several major limitations, including data sparsity. Sparse data affect the quality of the user similarity measurement and consequently the quality of the recommender system. In this paper, we propose a novel user similarity measure based...


Supplementary material: Sparsity and the truncated ℓ2-norm

We require a basic lemma on soft-thresholding before proceeding with the proof of Theorem 1. Recall the soft-thresholding function s_λ(x) = sign(x){(|x| − λ) ∨ 0}, for x ∈ ℝ and λ ≥ 0, and define r(λ; θ) = E_θ{s_λ(x) − θ}², for θ ∈ ℝ and λ ≥ 0, to be the risk of soft-thresholding in the one-dimensional problem, where x ∼ N(θ, 1). The following result is essentially contained in Johnstone (2013).
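As a sanity check on these definitions, the sketch below evaluates r(λ; θ) by Monte Carlo using the soft-thresholding function defined above; the sample size, seed, and the particular (λ, θ) values are arbitrary choices for the example.

import numpy as np

def soft_threshold(x, lam):
    """s_lambda(x) = sign(x) * {(|x| - lambda) v 0}."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def risk_mc(lam, theta, n_draws=200_000, seed=0):
    """Monte Carlo estimate of r(lambda; theta) = E_theta {s_lambda(x) - theta}^2, x ~ N(theta, 1)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=theta, scale=1.0, size=n_draws)
    return np.mean((soft_threshold(x, lam) - theta) ** 2)

print(risk_mc(lam=1.0, theta=0.0))   # small risk when theta = 0
print(risk_mc(lam=1.0, theta=5.0))   # approaches 1 + lambda^2 as |theta| grows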




Journal title:

Volume   Issue

Pages   -

Publication date: 2014